Step 4: Conducting Experiments
The fourth step in a simulation study is to conduct simulation experiments with the model. Simulation is essentially an application of the scientific method. One begins with a theory of why certain design rules or management strategies are better than others. From that theory the designer develops a hypothesis, tests the hypothesis through simulation, and draws conclusions about its validity from the simulation results. In a simulation experiment, the input variables defining the model are independent variables that may be manipulated or varied; the effects of this manipulation on other dependent, or response, variables are measured and correlated.
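To make the idea of manipulating independent variables and measuring response variables concrete, here is a minimal sketch in plain Python (not ProModel syntax), using a simple single-server queue as a stand-in model: the mean service time is the independent variable being varied, and the average customer waiting time is the dependent response being measured.

import numpy as np

def avg_waiting_time(mean_service, mean_interarrival=1.0, n_customers=10_000, seed=1):
    # Single-server queue via Lindley's recursion; returns the mean wait in queue.
    rng = np.random.default_rng(seed)
    total, wait = 0.0, 0.0
    for _ in range(n_customers):
        total += wait
        wait = max(0.0, wait + rng.exponential(mean_service) - rng.exponential(mean_interarrival))
    return total / n_customers

# Manipulate the independent variable and observe the response.
for mean_service in (0.5, 0.7, 0.9):
    print(f"mean service time {mean_service:.1f} -> average wait {avg_waiting_time(mean_service):.2f}")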
In some simulation experiments we are interested in the steady-state behavior of the model. Steady-state behavior does not mean that the simulation produces a steady outcome, but rather that the distribution, or statistical variation, of the outcome does not change over time. For example, a distribution warehouse may ship between 200 and 220 parts per hour under normal operating conditions. In other simulations we may be interested only in a particular period of operation, such as a specific day of the week; for these studies, the simulation may never reach a steady state.
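The following sketch (again plain Python, with an assumed single-server queue and an arbitrarily chosen warm-up length of 1,000 observations) illustrates the start-up effect in a steady-state study: because the model begins empty, the overall average waiting time is pulled down by the early observations, and a better estimate of steady-state behavior is obtained after a warm-up period is discarded.

import numpy as np

def waiting_times(n=20_000, mean_service=0.8, mean_interarrival=1.0, seed=7):
    # Successive customer waiting times in a simple single-server queue.
    rng = np.random.default_rng(seed)
    waits, w = [], 0.0
    for _ in range(n):
        waits.append(w)
        w = max(0.0, w + rng.exponential(mean_service) - rng.exponential(mean_interarrival))
    return np.array(waits)

waits = waiting_times()
warm_up = 1_000                                  # observations discarded as warm-up (assumed length)
print("mean wait including start-up period:", round(waits.mean(), 3))
print("mean wait after warm-up is discarded:", round(waits[warm_up:].mean(), 3))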
As with any experiment involving a system with random characteristics, the results of the simulation will also be random in nature. The results of a single simulation run represent only one of several possible outcomes, so multiple replications must be run to test the reproducibility of the results. Otherwise, a decision might be made on the basis of a fluke outcome, or at least an outcome not representative of what would normally be expected. Because simulation uses a pseudo-random number generator, rerunning the model with the same seed values simply reproduces the same sample. To obtain independent samples, the starting seed value of each random stream must be changed from replication to replication, ensuring that the random numbers generated are independent across replications.
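A sketch of this idea in plain Python appears below (ProModel's own replication facility manages streams and seeds automatically; the model here is reduced to a toy function that returns one simulated hourly shipment count). Re-running with the same seed reproduces the identical sample, while assigning each replication its own seed yields independent outcomes.

import numpy as np

def simulate_hourly_shipments(seed):
    # Toy stand-in for a simulation model: average of eight simulated hourly counts.
    rng = np.random.default_rng(seed)
    return rng.normal(loc=210, scale=5, size=8).mean()

print([round(simulate_hourly_shipments(seed=42), 1) for _ in range(3)])
# Three identical values: the same seed reproduces the same pseudo-random sample.

replications = [round(simulate_hourly_shipments(seed=s), 1) for s in range(1, 6)]
print(replications)
# Five different outcomes, one per replication, because each uses a different seed.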
Depending on the degree of precision required in the output, it may be desirable to determine a confidence interval for the output. A confidence interval is a range within which we can have a certain level of confidence that the true mean falls. For a given confidence level or probability, say .90 or 90%, a confidence interval for the average utilization of a resource might be determined to be between 75.5 and 80.8%. We would then be able to say that there is a .90 probability that the true mean utilization of the modeled resource (not of the actual resource) lies between 75.5 and 80.8%.
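ProModel computes such intervals automatically, but the underlying calculation is straightforward. Here is a minimal sketch, assuming eight replication results (made-up utilization figures) and using SciPy's Student-t quantile to form a 90% confidence interval for the mean.

import math
from statistics import mean, stdev
from scipy.stats import t

utilizations = [76.2, 79.8, 77.5, 80.1, 78.4, 75.9, 79.0, 77.8]  # one result per replication (example data)
n = len(utilizations)
xbar, s = mean(utilizations), stdev(utilizations)
half_width = t.ppf(0.95, df=n - 1) * s / math.sqrt(n)            # 90% confidence -> 0.05 in each tail
print(f"90% CI for mean utilization: {xbar - half_width:.1f} to {xbar + half_width:.1f}%")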
Fortunately, ProModel provides convenient facilities for conducting experiments, running multiple replications and automatically calculating confidence intervals. The modeler must still decide, however, what types of experimentation are appropriate. When conducting simulation experiments, the following questions should be asked:
Am I interested in the steady-state behavior of the system or in a specific period of operation?
How can I eliminate start-up bias or get the right initial condition for the model?
What is the best method for obtaining sample observations that may be used to estimate the true expected behavior of the model?
What is an appropriate run length for the simulation?
How many replications should be made?
How many different random streams should be used?
How should initial seed values be controlled from replication to replication?
Answers to these questions will be determined largely by the following three factors:
1. The nature of the simulation (terminating or nonterminating).
2. The objective of the simulation (capacity analysis, alternative comparisons, etc.).
3. The precision required (rough estimate versus confidence interval estimates).
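To illustrate how the precision requirement bears on the question of how many replications to make, the following sketch (plain Python, with an assumed set of five pilot replications and an assumed target half-width of one percentage point) projects the number of replications needed to narrow a 90% confidence interval to the desired precision.

import math
from statistics import stdev
from scipy.stats import t

pilot = [76.2, 79.8, 77.5, 80.1, 78.4]       # utilizations from five pilot replications (example data)
target_half_width = 1.0                      # required precision, in percentage points (assumed)
s, n = stdev(pilot), len(pilot)

# Increase the replication count until the projected half-width meets the target.
while t.ppf(0.95, df=n - 1) * s / math.sqrt(n) > target_half_width:
    n += 1
print(f"approximately {n} replications needed for a +/-{target_half_width}% half-width")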